ChestLive

Authors

Abstract

Voice-based authentication is prevalent on smart devices to verify the legitimacy of users, but it is vulnerable to replay attacks. In this paper, we propose to leverage the distinctive chest motions during speaking to establish a secure multi-factor authentication system, named ChestLive. Compared with other biometric-based authentication systems, ChestLive does not require users to remember any complicated information (e.g., hand gestures, doodles), and its working distance is much longer (30 cm). We use acoustic sensing to monitor chest motions with the built-in speaker and microphone of smartphones. To obtain fine-grained chest motion signals for reliable user authentication, we derive the Channel Energy (CE) of the acoustic signals to capture the chest movement, and then remove the static and non-static interference from the aggregated CE signals. Representative features are extracted from the correlation between the voice signal and the corresponding chest motion signal. Unlike learning-based image or speech recognition models with millions of available training samples, our system has to deal with a limited number of samples from legitimate users during enrollment. To address this problem, we resort to meta-learning, which initializes a general model with a good generalization property that can be quickly fine-tuned to identify a new user. We implement ChestLive as an application and evaluate its performance in the wild with 61 volunteers using their smartphones. Experiment results show that ChestLive achieves an authentication accuracy of 98.31% and less than a 2% false accept rate against replay attacks and impersonation attacks. We also validate that ChestLive is robust to various factors, including training set size, sensing distance, angle, posture, phone models, and environment noises.
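The abstract's Channel Energy idea can be sketched as a sliding-window, band-limited energy measure around the inaudible sensing tone emitted by the speaker. The sketch below is only an illustration of that idea, not the paper's exact definition: the tone frequency (20 kHz), sampling rate, window and hop sizes are all assumed parameters.

```python
import numpy as np

def channel_energy(samples, fs=48_000, f0=20_000, band=500, win=1024, hop=256):
    """Sliding-window energy of the acoustic channel around a sensing tone.

    Hypothetical setup: an inaudible f0 = 20 kHz tone played by the phone's
    speaker and recorded at fs = 48 kHz; chest motion modulates the energy
    reflected back into the band [f0 - band, f0 + band].
    """
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    in_band = (freqs >= f0 - band) & (freqs <= f0 + band)
    window = np.hanning(win)
    ce = []
    for start in range(0, len(samples) - win + 1, hop):
        spec = np.fft.rfft(samples[start:start + win] * window)
        # Energy = sum of squared spectral magnitudes inside the band
        ce.append(np.sum(np.abs(spec[in_band]) ** 2))
    return np.asarray(ce)
```

A recording that contains the sensing tone yields a much larger CE trace than silence, and slow modulations of that trace are what a chest-motion pipeline would go on to analyze after interference removal.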


Journal

Journal title: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies

سال: 2021

ISSN: 2474-9567

DOI: https://doi.org/10.1145/3494962